4 research outputs found

    Collaborative Federated Learning For Healthcare: Multi-Modal COVID-19 Diagnosis at the Edge

    Despite significant improvements over the last few years, cloud-based healthcare applications continue to suffer from poor adoption due to their limitations in meeting stringent security, privacy, and quality-of-service requirements (such as low latency). The edge computing trend, along with techniques for distributed machine learning such as federated learning, has gained popularity as a viable solution in such settings. In this paper, we leverage the capabilities of edge computing in medicine by analyzing and evaluating the potential of intelligent processing of clinical visual data at the edge, allowing remote healthcare centers that lack advanced diagnostic facilities to benefit from multi-modal data securely. To this aim, we utilize the emerging concept of clustered federated learning (CFL) for automatic diagnosis of COVID-19. Such an automated system can help reduce the burden on healthcare systems across the world, which have been under severe strain since the COVID-19 pandemic emerged in late 2019. We evaluate the performance of the proposed framework under different experimental setups on two benchmark datasets. Promising results are obtained on both datasets, comparable to the central baseline in which specialized models (i.e., each on a specific type of COVID-19 imagery) are trained with central data, and improvements of 16% and 11% in overall F1-score are achieved over the multi-modal model trained in the conventional federated learning setup on the X-ray and ultrasound datasets, respectively. We also discuss in detail the associated challenges, technologies, tools, and techniques available for deploying ML at the edge in such privacy- and delay-sensitive applications.
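    The core idea of clustered federated learning described above — grouping clients whose model updates are similar and averaging parameters only within each group — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the cosine-similarity threshold and the greedy grouping strategy are assumptions for the sake of the example.

    ```python
    import numpy as np

    def fedavg(weight_sets):
        """Standard FedAvg: element-wise average of per-client parameter lists."""
        return [np.mean(layers, axis=0) for layers in zip(*weight_sets)]

    def cluster_clients(updates, threshold=0.5):
        """Greedily group clients whose flattened update vectors point in
        similar directions (cosine similarity), a simplified stand-in for
        the clustering step in clustered federated learning (CFL)."""
        clusters = []
        for i, u in enumerate(updates):
            placed = False
            for cluster in clusters:
                ref = updates[cluster[0]]  # compare against the cluster's first member
                sim = np.dot(u, ref) / (np.linalg.norm(u) * np.linalg.norm(ref))
                if sim >= threshold:
                    cluster.append(i)
                    placed = True
                    break
            if not placed:
                clusters.append([i])
        return clusters
    ```

    After clustering, `fedavg` would be applied separately within each cluster, so that, e.g., clients holding X-ray data and clients holding ultrasound data each converge to their own specialized model rather than a single blurred multi-modal one.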

    An active learning method for diabetic retinopathy classification with uncertainty quantification

    In recent years, deep learning (DL) techniques have provided state-of-the-art performance in medical imaging. However, good-quality (annotated) medical data is in general hard to find due to the typically high cost of medical images, the limited availability of expert annotators (e.g., radiologists), and the amount of time required for annotation. In addition, DL is data-hungry and its training requires extensive computational resources. Furthermore, DL is a black-box method: it offers little transparency into its inner workings or the reasoning behind its decisions, which increases uncertainty about its predictions. To this end, we address these challenges by proposing a hybrid model, which uses a Bayesian convolutional neural network (BCNN) for uncertainty quantification, and an active learning approach for annotating the unlabeled data. The BCNN is used as a feature descriptor, and these features are then used for training a model in an active learning setting. We evaluate the proposed framework on the diabetic retinopathy classification problem and demonstrate state-of-the-art performance in terms of different metrics. © 2022, International Federation for Medical and Biological Engineering. Adeel Razi is affiliated with The Wellcome Centre for Human Neuroimaging, supported by core funding from Wellcome (203147/Z/16/Z).
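    The acquisition loop sketched in the abstract — quantify predictive uncertainty with a Bayesian network, then send the most uncertain unlabeled samples to an expert — can be illustrated in a few lines. This is a generic sketch assuming Monte Carlo sampling (e.g., MC dropout) as the approximate Bayesian inference scheme and predictive entropy as the acquisition function; the paper's exact BCNN and acquisition strategy may differ.

    ```python
    import numpy as np

    def predictive_entropy(mc_probs):
        """Uncertainty per sample from T stochastic forward passes of a
        Bayesian CNN. mc_probs has shape (T, n_samples, n_classes);
        we take the entropy of the mean predictive distribution."""
        mean_p = mc_probs.mean(axis=0)                      # (n_samples, n_classes)
        return -np.sum(mean_p * np.log(mean_p + 1e-12), axis=1)

    def select_for_annotation(mc_probs, budget):
        """Active learning acquisition step: return the indices of the
        `budget` most uncertain unlabeled samples for expert labeling."""
        scores = predictive_entropy(mc_probs)
        return np.argsort(scores)[::-1][:budget]
    ```

    In each active learning round, the selected samples are labeled, added to the training set, and the model is retrained, so annotation effort concentrates on the images the model is least sure about.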